Deep Learning Algorithms for Mean Field Optimal Stopping in Finite Space and Discrete Time

Magnino, Lorenzo, Zhu, Yuchen, Laurière, Mathieu

arXiv.org Artificial Intelligence

Optimal stopping is a fundamental problem in optimization that has found applications in risk management, finance, economics, and, more recently, computer science. We extend the standard framework to a multi-agent setting, named multi-agent optimal stopping (MAOS), where a group of agents cooperatively solves finite-space, discrete-time optimal stopping problems. Since solving the finite-agent case is computationally prohibitive when the number of agents is very large, this work studies the mean field optimal stopping (MFOS) problem, obtained as the number of agents approaches infinity. We prove that MFOS provides a good approximate solution to MAOS, and we prove a dynamic programming principle (DPP) based on the theory of mean field control. We then propose two deep learning methods: one simulates full trajectories to learn optimal decisions, whereas the other leverages the DPP with backward induction; both methods train neural networks for the optimal stopping decisions. We demonstrate the effectiveness of these approaches through numerical experiments on six different problems in spatial dimension up to 300. To the best of our knowledge, this is the first work to study MFOS in finite space and discrete time, and to propose efficient and scalable computational methods for this type of problem.
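As a concrete anchor for the backward-induction approach, here is a minimal sketch of the classical dynamic programming principle for a single-agent, finite-space, discrete-time stopping problem. The mean-field interaction and the neural-network parameterization from the paper are omitted, and the transition matrix and rewards are random illustration data:

```python
import numpy as np

# Toy instance: finite state space, discrete time, stopping reward g_t(x).
rng = np.random.default_rng(0)
n_states, horizon = 5, 4
P = rng.dirichlet(np.ones(n_states), size=n_states)      # row-stochastic transition matrix
g = rng.uniform(0.0, 1.0, size=(horizon + 1, n_states))  # stopping reward g_t(x)

# Backward induction: V_T = g_T, and V_t = max(g_t, E[V_{t+1} | x_t]).
V = g[horizon].copy()
stop = np.zeros((horizon, n_states), dtype=bool)
for t in range(horizon - 1, -1, -1):
    cont = P @ V              # continuation value E[V_{t+1} | x_t = x]
    stop[t] = g[t] >= cont    # optimal decision: stop iff reward >= continuation value
    V = np.where(stop[t], g[t], cont)

print(V.shape, stop.shape)
```

The DPP-based method in the paper replaces this exact tabulation with neural networks for the stopping decisions, which is what makes dimensions up to 300 tractable.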


Zebra: In-Context and Generative Pretraining for Solving Parametric PDEs

Serrano, Louis, Koupaï, Armand Kassaï, Wang, Thomas X, Erbacher, Pierre, Gallinari, Patrick

arXiv.org Artificial Intelligence

Solving time-dependent parametric partial differential equations (PDEs) is challenging, as models must adapt to variations in parameters such as coefficients, forcing terms, and boundary conditions. Data-driven neural solvers either train on data sampled from the PDE parameters distribution in the hope that the model generalizes to new instances or rely on gradient-based adaptation and meta-learning to implicitly encode the dynamics from observations. This often comes with increased inference complexity. Inspired by the in-context learning capabilities of large language models (LLMs), we introduce Zebra, a novel generative auto-regressive transformer designed to solve parametric PDEs without requiring gradient adaptation at inference. By leveraging in-context information during both pre-training and inference, Zebra dynamically adapts to new tasks by conditioning on input sequences that incorporate context trajectories or preceding states. This approach enables Zebra to flexibly handle arbitrarily sized context inputs and supports uncertainty quantification through the sampling of multiple solution trajectories. We evaluate Zebra across a variety of challenging PDE scenarios, demonstrating its adaptability, robustness, and superior performance compared to existing approaches.


Aggregation Models with Optimal Weights for Distributed Gaussian Processes

Chen, Haoyuan, Tuo, Rui

arXiv.org Machine Learning

Gaussian process (GP) models have received increasing attention in recent years due to their superb prediction accuracy and modeling flexibility. To address the computational burden of GP models on large-scale datasets, distributed learning for GPs is often adopted. Current aggregation models for distributed GPs are not time-efficient when incorporating correlations between GP experts. In this work, we propose a novel approach for aggregated prediction in distributed GPs. The technique is suitable for both exact and sparse variational GPs. The proposed method incorporates correlations among experts, leading to better prediction accuracy with manageable computational requirements. As demonstrated by empirical studies, the proposed approach results in more stable predictions in less time than state-of-the-art consistent aggregation models.
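For intuition about aggregating distributed GP experts, the following is a minimal sketch of the simplest precision-weighted aggregation, which is optimal only under an independence assumption between experts; the paper's contribution is precisely to relax this assumption by modeling correlations. All arrays below are invented illustration data:

```python
import numpy as np

def aggregate(means, variances):
    """Combine per-expert predictions; arrays have shape (n_experts, n_test).

    Weights each expert by inverse predictive variance, the optimal linear
    weighting when experts' errors are independent (ignoring the
    cross-expert correlations the paper accounts for).
    """
    precisions = 1.0 / variances
    weights = precisions / precisions.sum(axis=0)   # normalized inverse-variance weights
    agg_mean = (weights * means).sum(axis=0)
    agg_var = 1.0 / precisions.sum(axis=0)          # aggregated predictive variance
    return agg_mean, agg_var

rng = np.random.default_rng(1)
means = rng.normal(size=(4, 6))                     # 4 experts, 6 test points
variances = rng.uniform(0.5, 2.0, size=(4, 6))
m, v = aggregate(means, variances)
print(m.shape, v.shape)
```

Note that the aggregated variance is never larger than the best individual expert's variance, which is the usual motivation for this weighting.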


L4GM: Large 4D Gaussian Reconstruction Model

Ren, Jiawei, Xie, Kevin, Mirzaei, Ashkan, Liang, Hanxue, Zeng, Xiaohui, Kreis, Karsten, Liu, Ziwei, Torralba, Antonio, Fidler, Sanja, Kim, Seung Wook, Ling, Huan

arXiv.org Artificial Intelligence

We present L4GM, the first 4D Large Reconstruction Model that produces animated objects from a single-view video input - in a single feed-forward pass that takes only a second. Key to our success is a novel dataset of multiview videos containing curated, rendered animated objects from Objaverse. This dataset depicts 44K diverse objects with 110K animations rendered in 48 viewpoints, resulting in 12M videos with a total of 300M frames. We keep our L4GM simple for scalability and build directly on top of LGM [49], a pretrained 3D Large Reconstruction Model that outputs 3D Gaussian ellipsoids from multiview image input. L4GM outputs a per-frame 3D Gaussian Splatting representation from video frames sampled at a low fps and then upsamples the representation to a higher fps to achieve temporal smoothness. We add temporal self-attention layers to the base LGM to help it learn consistency across time, and utilize a per-timestep multiview rendering loss to train the model. The representation is upsampled to a higher framerate by training an interpolation model which produces intermediate 3D Gaussian representations. We showcase that L4GM, though trained only on synthetic data, generalizes extremely well to in-the-wild videos, producing high-quality animated 3D assets.


An embedding-based distance for temporal graphs

Dall'Amico, Lorenzo, Barrat, Alain, Cattuto, Ciro

arXiv.org Artificial Intelligence

We define a distance between temporal graphs based on graph embeddings built using time-respecting random walks. We study both the case of matched graphs, when there exists a known relation between the nodes, and the unmatched case, when such a relation is unavailable and the graphs may be of different sizes. We illustrate the interest of our distance definition, using both real and synthetic temporal network data, by showing its ability to discriminate between graphs with different structural and temporal properties. Leveraging state-of-the-art machine learning techniques, we propose an efficient implementation of distance computation that is viable for large-scale temporal graphs.
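A hedged sketch of the matched-graph case: assuming node embeddings have already been computed (the time-respecting random-walk step is omitted), one way to compare them up to rotations of the embedding space is through their Gram matrices. The function name and normalization here are illustrative choices, not the paper's exact definition:

```python
import numpy as np

def embedding_distance(X, Y):
    """Distance between two (n_nodes, dim) embedding matrices with matched rows.

    Comparing Gram matrices X X^T and Y Y^T makes the distance invariant to
    orthogonal rotations of the embedding space, since (XQ)(XQ)^T = X X^T
    for any orthogonal Q.
    """
    Gx, Gy = X @ X.T, Y @ Y.T
    return np.linalg.norm(Gx - Gy) / X.shape[0]

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal rotation
print(embedding_distance(X, X @ Q))            # near zero: rotation-invariant
```

The unmatched case in the paper, where no node correspondence is available and sizes may differ, requires comparing rotation- and permutation-invariant summaries instead of row-aligned Gram matrices.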


Calibration of Quantum Decision Theory: Aversion to Large Losses and Predictability of Probabilistic Choices

Kovalenko, T., Vincent, S., Yukalov, V. I., Sornette, D.

arXiv.org Artificial Intelligence

We present the first calibration of quantum decision theory (QDT) to a dataset of binary risky choices. We quantitatively account for the fraction of choice reversals between two repetitions of the experiment, using a probabilistic choice formulation in the simplest form without model assumption or adjustable parameters. The prediction of choice reversal is then refined by introducing heterogeneity between decision makers through their differentiation into two groups: ``majoritarian'' and ``contrarian'' (in proportion 3:1). This supports the first fundamental tenet of QDT, which models choice as an inherently probabilistic process, where the probability of a prospect can be expressed as the sum of its utility and attraction factors. We propose to parameterise the utility factor with a stochastic version of cumulative prospect theory (logit-CPT), and the attraction factor with a constant absolute risk aversion (CARA) function. For this dataset, and penalising the larger number of QDT parameters via the Wilks test of nested hypotheses, the QDT model is found to perform significantly better than logit-CPT at both the aggregate and individual levels, and for all considered fit criteria for the first experiment iteration and for predictions (second ``out-of-sample'' iteration). The distinctive QDT effect captured by the attraction factor is mostly appreciable (i.e., most relevant and strongest in amplitude) for prospects with big losses. Our quantitative analysis of the experimental results supports the existence of an intrinsic limit of predictability, which is associated with the inherently probabilistic nature of choice. The results of the paper can find applications both in predicting the choices of human decision makers and in organizing the operation of artificial intelligence.
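The decomposition described above, choice probability as a utility factor plus an attraction factor, can be sketched as follows. The logit utilities and attraction values here are invented illustration numbers, not the calibrated logit-CPT and CARA parameterizations of the paper:

```python
import math

def qdt_probabilities(utilities, attraction, beta=1.0):
    """QDT-style choice probabilities p_i = f_i + q_i for a binary lottery pair.

    f_i are utility factors from a logit over utilities (summing to 1);
    q_i are attraction factors constrained to sum to zero, so the p_i
    remain a valid probability distribution.
    """
    z = [math.exp(beta * u) for u in utilities]
    f = [v / sum(z) for v in z]                 # utility factors (logit form)
    assert abs(sum(attraction)) < 1e-12        # attraction factors must sum to 0
    p = [fi + qi for fi, qi in zip(f, attraction)]
    assert all(0.0 <= pi <= 1.0 for pi in p)
    return p

p = qdt_probabilities([1.0, 0.5], [0.1, -0.1])
print(p)
```

The attraction term shifts probability mass toward (or away from) a prospect without changing the underlying utility ranking, which is how QDT accommodates the choice reversals the paper quantifies.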


Learning to Control Local Search for Combinatorial Optimization

Falkner, Jonas K., Thyssens, Daniela, Bdeir, Ahmad, Schmidt-Thieme, Lars

arXiv.org Artificial Intelligence

Combinatorial optimization problems are encountered in many practical contexts such as logistics and production, but exact solutions are particularly difficult to find and usually NP-hard for considerable problem sizes. To compute approximate solutions, a zoo of generic as well as problem-specific variants of local search is commonly used. However, which variant to apply to which particular problem is difficult to decide even for experts. In this paper we identify three independent algorithmic aspects of such local search algorithms and formalize their sequential selection over an optimization process as a Markov Decision Process (MDP). We design a deep graph neural network as the policy model for this MDP, yielding a learned controller for local search called NeuroLS. Ample experimental evidence shows that NeuroLS is able to outperform both well-known general-purpose local search controllers from Operations Research and the latest machine-learning-based approaches.
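To illustrate the framing of operator selection as a sequential decision problem, here is a toy sketch in which a simple epsilon-greedy bandit (standing in for the learned graph-neural-network policy of NeuroLS) chooses between two hypothetical neighborhood operators while minimizing a sorting-distance objective:

```python
import random

def swap(x):       # neighborhood operator 1: exchange two random positions
    x = x[:]
    i, j = random.sample(range(len(x)), 2)
    x[i], x[j] = x[j], x[i]
    return x

def reinsert(x):   # neighborhood operator 2: move one element elsewhere
    x = x[:]
    v = x.pop(random.randrange(len(x)))
    x.insert(random.randrange(len(x) + 1), v)
    return x

def cost(x):       # toy objective: total displacement from sorted order
    return sum(abs(v - i) for i, v in enumerate(x))

random.seed(0)
ops, value, counts = [swap, reinsert], [0.0, 0.0], [1, 1]
x = list(range(20))
random.shuffle(x)
init = cost(x)
for step in range(2000):
    # epsilon-greedy selection over operators (the "policy" of this sketch)
    if random.random() < 0.1:
        a = random.randrange(2)
    else:
        a = 0 if value[0] >= value[1] else 1
    y = ops[a](x)
    reward = cost(x) - cost(y)                   # improvement as reward signal
    counts[a] += 1
    value[a] += (reward - value[a]) / counts[a]  # running-average value update
    if cost(y) <= cost(x):                       # accept non-worsening moves
        x = y
print(init, cost(x))
```

The paper's controller differs in every ingredient (a graph neural network policy, a richer MDP state, three algorithmic aspects rather than one operator choice), but the loop structure of observe, select a local-search move, and collect an improvement signal is the same.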


Solve Interview Case Studies 10x Faster Using Dynamic Programming

@machinelearnbot

The ability to solve case studies comes with regular practice. Many times, if you find yourself failing to think like a pro, it's just because you haven't practiced enough. To help you become confident, I've written multiple case studies in the last month. You can check the recent ones here. If you haven't solved any of them, I'd suggest you check them out first.


SheffieldML/vargplvm

@machinelearnbot

This repository contains both MATLAB and R code for implementing the Bayesian GP-LVM. The MATLAB code is in the subdirectory vargplvm, the R code in vargplvmR. For a quick description and sample videos / demos check: http://git.io/A3Uv The Bayesian GP-LVM (Titsias and Lawrence, 2010) is an extension of the traditional GP-LVM where the latent space is approximately marginalised out in a variational fashion (hence the prefix 'vargplvm'). Let us denote \mathbf{Y} as a matrix of observations (here called outputs) with dimensions n \times p, where the n rows correspond to datapoints and the p columns to dimensions.


Natural Events

Bell, J.

Journal of Artificial Intelligence Research

This paper develops an inductive theory of predictive common sense reasoning. The theory provides the basis for an integrated solution to the three traditional problems of reasoning about change: the frame, qualification, and ramification problems. The theory is also capable of representing non-deterministic events, and it provides a means for stating defeasible preferences over the outcomes of conflicting simultaneous events.